
    Detecting Manipulations in Video

    This chapter presents the techniques researched and developed within InVID for the forensic analysis of videos and the detection and localization of forgeries within User-Generated Videos (UGVs). Following an overview of state-of-the-art video tampering detection techniques, we observed that the bulk of current research is dedicated to frame-based tampering analysis or encoding-based inconsistency characterization. We built upon this existing research by designing forensics filters aimed at highlighting traces left behind by video tampering, with a focus on identifying disruptions in the temporal aspects of a video. As in many other data analysis domains, deep neural networks show very promising results in tampering detection as well. Thus, following the development of a number of analysis filters aimed at helping human users highlight inconsistencies in video content, we proceeded to develop a deep learning approach that analyzes the outputs of these forensics filters and automatically detects tampered videos. In this chapter, we present our survey of the state of the art with respect to its relevance to the goals of InVID, the forensics filters we developed and their potential role in localizing video forgeries, as well as our deep learning approach for automatic tampering detection. We present and analyze experimental results on benchmark and real-world data. We observe that the proposed method yields promising results compared to the state of the art, especially with respect to the algorithm’s ability to generalize to unknown data taken from the real world. We conclude with the research directions that our work in InVID has opened for the future.
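
    The chapter's filters are not reproduced here; purely as a hedged illustration of the idea of surfacing temporal disruptions, the sketch below computes residual maps between consecutive (smoothed) frames, where abrupt spikes in the mean residual can flag candidate tampering points. The function and parameter names are hypothetical and not taken from InVID.

        # Illustrative sketch only (assumptions: OpenCV available, grayscale
        # frame differencing stands in for the actual InVID temporal filters).
        import cv2
        import numpy as np

        def temporal_residual_maps(video_path, blur_ksize=5):
            """Yield absolute-difference maps between consecutive smoothed frames."""
            cap = cv2.VideoCapture(video_path)
            ok, prev = cap.read()
            if not ok:
                return
            prev_gray = cv2.GaussianBlur(
                cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (blur_ksize, blur_ksize), 0)
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                # Smoothing before differencing emphasises structural changes
                # over sensor noise.
                gray = cv2.GaussianBlur(
                    cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (blur_ksize, blur_ksize), 0)
                yield cv2.absdiff(gray, prev_gray)
                prev_gray = gray
            cap.release()

        # Example: abrupt spikes in the mean residual suggest temporal disruptions.
        # scores = [float(np.mean(r)) for r in temporal_residual_maps("clip.mp4")]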

    Detecting Tampered Videos with Multimedia Forensics and Deep Learning

    User-Generated Content (UGC) has become an integral part of the news reporting cycle. As a result, the need to verify videos collected from social media and Web sources is becoming increasingly important for news organisations. While video verification is attracting a lot of attention, there has been limited effort so far in applying video forensics to real-world data. In this work we present an approach for automatic video manipulation detection inspired by manual verification approaches. In a typical manual verification setting, video filter outputs are visually interpreted by human experts. We use two such forensics filters designed for manual verification, one based on Discrete Cosine Transform (DCT) coefficients and a second based on video requantization errors, and combine them with deep Convolutional Neural Networks (CNNs) designed for image classification. We compare the performance of the proposed approach to other works from the state of the art and find that, while competing approaches perform better when trained with videos from the same dataset, one of the proposed filters demonstrates superior performance in cross-dataset settings. We discuss the implications of our work and the limitations of the current experimental setup, and propose directions for future research in this area.
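
    The abstract names the two filters but not their implementation. The following sketch, under stated assumptions, shows the general pipeline described: compute a requantization-error map per frame (the difference between a frame and a recompressed copy of it) and pass it to a standard image-classification CNN repurposed for a two-class tampered/pristine decision. The JPEG-based recompression, the ResNet-50 backbone, the helper names, and the output ordering are all assumptions rather than the authors' code.

        # Hedged sketch: requantization-error filter map fed to an image CNN.
        import cv2
        import numpy as np
        import torch
        import torchvision

        def requantization_error_map(frame_bgr, jpeg_quality=90):
            """Difference between a frame and its recompressed version
            (a simple stand-in for a requantization-error filter)."""
            ok, buf = cv2.imencode(".jpg", frame_bgr,
                                   [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
            recompressed = cv2.imdecode(buf, cv2.IMREAD_COLOR)
            return cv2.absdiff(frame_bgr, recompressed)

        # A stock image-classification CNN with a two-class head
        # (tampered vs. pristine); the architecture choice is an assumption.
        model = torchvision.models.resnet50(weights=None, num_classes=2)
        model.eval()

        def classify_frame(frame_bgr):
            err = requantization_error_map(frame_bgr).astype(np.float32) / 255.0
            x = torch.from_numpy(err).permute(2, 0, 1).unsqueeze(0)  # NCHW tensor
            with torch.no_grad():
                logits = model(x)
            return torch.softmax(logits, dim=1)  # class probabilities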

    The InVID Plug-in: Web Video Verification on the Browser

    This paper presents a novel open-source browser plug-in that aims to support journalists and news professionals in their efforts to verify user-generated video. The plug-in, which is the result of an iterative design thinking methodology, brings together a number of sophisticated multimedia analysis components and third-party services, with the goal of speeding up established verification workflows and making it easy for journalists to access the results of different services that were previously used as standalone tools. The tool has been downloaded several hundred times and is currently used by journalists worldwide, after being tested for a few months by Agence France Presse (AFP) and Deutsche Welle (DW) journalists and media researchers. The tool has already helped debunk a number of fake videos.

    Modelling place memory in crickets

    Insects can remember and return to a place of interest using the surrounding visual cues. In previous experiments, we showed that crickets could home to an invisible cool spot in a hot environment. They did so most effectively with a natural scene surround, though they were also able to home with distinct landmarks or blank walls. Homing was not successful, however, when visual cues were removed in a dark control. Here, we compare six different models of visual homing using the same visual environments. Only models deemed biologically plausible for use by insects were implemented. The average landmark vector model and first-order differential optic flow are unable to home better than chance in at least one of the visual environments. Second-order differential optic flow and GradDescent on image differences can home better than chance in all visual environments, and best in the natural scene environment, but do not quantitatively match the distributions of the cricket data. Two models, the centre of mass average landmark vector and RunDown on image differences, could produce the same pattern of results as observed for crickets. Both models performed best using simple binary images and were robust to changes in resolution and image smoothing.
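
    As a rough sketch of how an image-difference homing model of the RunDown kind can operate (not the paper's implementation): the agent stores a snapshot of the panorama at the goal, keeps moving while the pixel difference between its current view and that snapshot decreases, and turns to a random new heading when the difference grows. The render_view callback and all names and parameters below are hypothetical placeholders.

        # Hedged sketch of descent on image differences for visual homing.
        import numpy as np

        def rms_difference(view, snapshot):
            """Root-mean-square pixel difference between current view and goal snapshot."""
            diff = view.astype(float) - snapshot.astype(float)
            return float(np.sqrt(np.mean(diff ** 2)))

        def run_down_step(position, heading, snapshot, render_view, step=1.0, rng=None):
            """Advance one step; re-orient randomly if the image difference grew."""
            rng = rng or np.random.default_rng()
            before = rms_difference(render_view(position), snapshot)
            candidate = position + step * np.array([np.cos(heading), np.sin(heading)])
            after = rms_difference(render_view(candidate), snapshot)
            if after <= before:
                return candidate, heading               # keep going: difference decreased
            return position, rng.uniform(0, 2 * np.pi)  # difference grew: pick new direction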